14 research outputs found

    On the tailoring of CAST-32A certification guidance to real COTS multicore architectures

    The use of Commercial Off-The-Shelf (COTS) multicores in the real-time industry is on the rise due to multicores' potential for performance increase and energy reduction. Yet, the unpredictable impact on timing of contention in shared hardware resources challenges certification. Furthermore, most safety certification standards target single-core architectures and do not provide explicit guidance for multicore processors. Recently, however, CAST-32A has been presented, providing guidance for software planning, development and verification on multicores. In this paper, at the theoretical level, we provide a detailed review of the CAST-32A objectives and the difficulty of reaching them under current COTS multicore design trends; at the experimental level, we assess the difficulties of applying CAST-32A to a real multicore processor, the NXP P4080. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Jaume Abella has been partially supported by the MINECO under Ramon y Cajal grant RYC-2013-14717. Peer Reviewed. Postprint (author's final draft).
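    As a back-of-the-envelope illustration of the contention problem this abstract raises, one might model execution-time inflation on a shared bus as follows. The linear round-robin stall model and all numbers are hypothetical assumptions for this sketch; they are not taken from the paper or measured on the NXP P4080.

```python
# Toy model of multicore interference: each access to a shared bus may
# stall behind every co-running core (a pessimistic round-robin bound).
# All parameters below are invented for illustration.

def contended_time(base_time, bus_accesses, contenders, stall_per_access=2):
    """Execution time (cycles) of a task given contention on a shared bus."""
    return base_time + bus_accesses * contenders * stall_per_access

solo = contended_time(10_000, 500, contenders=0)        # running alone
with_three = contended_time(10_000, 500, contenders=3)  # three co-runners
print(solo, with_three)  # interference grows with the number of active cores
```

    Even this crude model shows why single-core timing arguments do not transfer: the bound depends on how many cores are active, which certification guidance must account for.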

    Mitigating Software-Instrumentation Cache Effects in Measurement-Based Timing Analysis

    Measurement-based timing analysis (MBTA) is often used to determine the timing behaviour of software programs embedded in safety-aware real-time systems deployed in various industrial domains, including automotive and railway. MBTA methods rely on some form of instrumentation, either at hardware or software level, of the target program or fragments thereof to collect execution-time measurement data. A known drawback of software-level instrumentation is that the instrumentation itself affects the timing and functional behaviour of the program, resulting in the so-called probe effect: leaving the instrumentation code in the final executable can degrade average performance and may not even be admissible under stringent industrial qualification and certification standards; removing it before operation jeopardizes the results of timing analysis, as the WCET estimates obtained on the instrumented version of the program may no longer be valid, for example due to the timing effects of different cache alignments. In this paper, we present a novel approach to mitigate the impact of instrumentation code on cache behaviour by reducing the instrumentation overhead while at the same time preserving and consolidating the results of timing analysis.
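    A minimal sketch of the probe effect described above: software instrumentation points that record timestamps also perturb the code they time. The ipoint scheme below is an illustrative assumption, not the paper's instrumentation mechanism.

```python
# Software instrumentation points ("ipoints") collect timestamps but cost
# time and touch the cache themselves -- the probe effect in miniature.
import time

TRACE = []           # (ipoint id, timestamp) pairs collected at run time
INSTRUMENTED = True  # flip to False to model stripping probes before release

def ipoint(ident):
    # instrumentation point: extra code executed only in instrumented builds
    if INSTRUMENTED:
        TRACE.append((ident, time.perf_counter_ns()))

def work():
    ipoint("entry")
    total = sum(i * i for i in range(1000))
    ipoint("exit")
    return total

result = work()
# Stripping the probes (INSTRUMENTED = False) changes code size and cache
# alignment, so WCET estimates from the instrumented build may not carry
# over to the deployed binary -- the dilemma the abstract describes.
```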

    Software Time Reliability in the Presence of Cache Memories

    The use of caches challenges measurement-based timing analysis (MBTA) in critical embedded systems. In the presence of caches, the worst-case timing behavior of a system heavily depends on how code and data are laid out in cache. Guaranteeing that test runs capture, and hence that MBTA results are representative of, the worst-case conflictive cache layouts is generally unaffordable for end users. The probabilistic variant of MBTA, MBPTA, exploits randomized caches and relieves the user from the burden of concocting layouts. In exchange, MBPTA requires the user to control the number of runs so that a solid probabilistic argument can be made about having captured the effect of worst-case cache conflicts during analysis. We present a computationally tractable Time-aware Address Conflict (TAC) mechanism that determines whether the impact of conflictive memory layouts is indeed captured in the MBPTA runs and prompts the user for more runs in case it is not. The research leading to these results has received funding from the European Community's FP7 [FP7/2007-2013] under the PROXIMA Project (http://www.proxima-project.eu), grant agreement no. 611085. This work has also been partially supported by the Spanish Ministry of Science and Innovation under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Jaume Abella has been partially supported by the Ministry of Economy and Competitiveness under Ramon y Cajal postdoctoral fellowship RYC-2013-14717. Peer Reviewed. Postprint (author's final draft).
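    A toy illustration of the MBPTA argument above: with a random-placement cache, each run draws a new layout, and enough runs are needed for conflictive layouts to appear in the sample. The cache geometry and the four hot addresses are invented for this sketch and have no relation to the paper's evaluation platform.

```python
# Simulate random placement: on every run each hot address maps to a
# randomly chosen cache set, so conflictive layouts occur with some
# probability that the number of runs must be large enough to capture.
import random

NUM_SETS = 16
NUM_ADDRESSES = 4   # hot addresses competing for cache sets (hypothetical)

def random_layout(seed):
    rng = random.Random(seed)
    return [rng.randrange(NUM_SETS) for _ in range(NUM_ADDRESSES)]

def has_conflict(layout):
    # two or more addresses sharing a set can evict each other
    return len(set(layout)) < len(layout)

runs = [random_layout(seed) for seed in range(1000)]
observed = sum(map(has_conflict, runs)) / len(runs)
print(f"fraction of runs exhibiting a set conflict: {observed:.2f}")
```

    A mechanism like TAC asks, in essence, whether the sample of layouts actually drawn contains the timing-relevant conflicts; if not, the user is prompted for more runs.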

    Improving Measurement-Based Timing Analysis through Randomisation and Probabilistic Analysis

    The use of increasingly complex hardware and software platforms in response to the ever-rising performance demands of modern real-time systems complicates the verification and validation of their timing behaviour, which forms a time- and effort-intensive step of system qualification or certification. In this paper we relate the current state of practice in measurement-based timing analysis, the predominant choice for industrial developers, to the work of the PROXIMA project in that field. We recall the difficulties that the shift towards more complex computing platforms causes in that regard. Then we discuss the probabilistic approach proposed by PROXIMA to overcome some of those limitations. We present the main principles behind the PROXIMA approach, as well as the changes it requires at hardware or software level underneath the application. We also present the current status of the project against its overall goals, and highlight some of the principal confidence-building results achieved so far.
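    The probabilistic step mentioned above can be sketched as fitting an extreme-value (Gumbel) tail to measured execution times and reading off a probabilistic WCET at a target exceedance probability. The synthetic samples and the moment-based fit below are simplifying assumptions for illustration; they are not the PROXIMA toolchain or its statistical protocol.

```python
# Fit a Gumbel distribution to synthetic execution-time samples by the
# method of moments, then invert its CDF to get a pWCET estimate: a bound
# exceeded with at most the given probability per run.
import math
import random
import statistics

rng = random.Random(42)
# synthetic per-run execution times in cycles, illustrative only
samples = [10_000 + sum(rng.choices([0, 5, 20], [0.80, 0.15, 0.05], k=50))
           for _ in range(500)]

mean = statistics.mean(samples)
std = statistics.stdev(samples)
beta = std * math.sqrt(6) / math.pi   # Gumbel scale via method of moments
mu = mean - 0.5772156649 * beta       # Gumbel location (Euler-Mascheroni)

def pwcet(exceedance):
    # smallest bound t with P(time > t) <= exceedance under the fitted Gumbel
    return mu - beta * math.log(-math.log(1.0 - exceedance))

print(f"pWCET at 1e-3 exceedance per run: {pwcet(1e-3):.0f} cycles")
```

    Note how the bound grows as the tolerated exceedance probability shrinks; choosing that probability to match the system's safety requirements is part of the argument the project builds.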

    The Transient Tolerant Time-Triggered System-on-Chip (4TSoC)

    No full text
    Embedded systems from different application domains (offshore windmills, railway, avionics, etc.) can benefit from integrated architectures. Functionality that required several chips in the past can now be integrated into a single chip thanks to recent advances in silicon technology miniaturization. This approach brings economic benefits through the reduced cost of electronic components and interconnections. Most current integrated architectures have been implemented using a software approach (e.g., a hypervisor) to build the illusion of several execution environments on a monolithic processor chip. Building the same architectures with a hardware approach, upon a Multi-Processor System-on-Chip (MPSoC), yields not only better performance but also better power efficiency and, especially, many advantages regarding reliability. High integration enables small transistor technologies, but it also yields chips that are more sensitive to energy variations, which requires new fault tolerance measures to cope with the significantly increased transient fault rates (e.g., soft errors). This dissertation presents the Transient Tolerant Time-Triggered System-on-Chip (4TSoC), an integrated architecture for safety-related embedded systems. It proposes replication of application-dependent components and of the system component implementing the architecture; these options form a fault tolerance model for MPSoCs. Moreover, the fault containment mandatory to make this replication approach work is presented, along with the measures to preserve this property at synthesis. A Fault Injection for System-on-Chip (FI4SoC) framework has been developed to test state-of-the-art integrated architectures (e.g., XtratuM, TTSoC) and to validate 4TSoC hardening configurations. Finally, the most promising configurations have been studied within several application domains.
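    The replication idea above can be sketched as three replicas placed in separate fault containment regions feeding a majority voter, which masks a single transient fault. The voter and the fault model below are illustrative assumptions, not the 4TSoC implementation.

```python
# Triple modular redundancy in miniature: vote over three replica outputs;
# one corrupted output (e.g., a soft error in one fault containment region)
# is outvoted, two or more leave no majority and must be signalled.
from collections import Counter

def majority_vote(replica_outputs):
    value, count = Counter(replica_outputs).most_common(1)[0]
    if count * 2 <= len(replica_outputs):
        raise RuntimeError("no majority: fault not masked")
    return value

healthy = majority_vote([42, 42, 42])
one_transient = majority_vote([42, 7, 42])  # soft error in one replica masked
print(healthy, one_transient)
```

    The scheme only works if the replicas fail independently, which is exactly why the dissertation insists on fault containment regions and on preserving them through synthesis.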

    Automotive safety concept definition for mixed-criticality integration on a COTS multicore

    No full text
    Mixed-criticality systems, which integrate applications subject to different safety assurance levels on the same multicore embedded platform, can provide benefits in terms of performance, cost, size, weight, and power. Despite these potential benefits, however, several hard challenges related to the safety certification of multicore approaches must be addressed before endorsing their unrestrained adoption. This paper describes an ISO 26262-compliant safety concept for an automotive mixed-criticality case study on top of a multicore platform. To this end, key aspects such as time and space partitioning are evaluated and enforced by means of hardware protection mechanisms.
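    One way to picture the time-partitioning aspect mentioned above: partitions of different criticality get fixed, non-overlapping windows within a periodic frame, so a low-criticality task cannot consume a high-criticality task's CPU time. The partition names and window lengths below are hypothetical, not taken from the paper's case study.

```python
# A static schedule of (partition, start_us, end_us) windows within a
# 10 ms major frame, plus a check that temporal isolation holds, i.e.
# that no two windows overlap. All values are invented for illustration.
SCHEDULE = [
    ("ASIL-D_control", 0, 4000),
    ("QM_infotainment", 4000, 7000),
    ("ASIL-B_monitor", 7000, 10000),
]

def windows_disjoint(schedule):
    # sort by window start and check each window ends before the next begins
    ordered = sorted(schedule, key=lambda w: w[1])
    return all(a[2] <= b[1] for a, b in zip(ordered, ordered[1:]))

isolated = windows_disjoint(SCHEDULE)
print("temporal isolation holds:", isolated)
```

    In a real system this property is enforced by hardware mechanisms (timers, MPUs) rather than merely checked offline, which is the enforcement angle the paper takes.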

    WCET analysis methods: Pitfalls and challenges on their trustworthiness

    No full text
    In the last three decades a number of methods have been devised to find upper bounds on the execution time of critical tasks in time-critical systems. Most such methods aim to compute Worst-Case Execution Time (WCET) estimates, which can be used as trustworthy upper bounds on the execution time that the analysed programs will ever take during operation. The analysis approaches used include static, measurement-based and probabilistic methods, as well as hybrid combinations of them. Each of those approaches delivers its results on the assumption that certain hypotheses hold on the timing behaviour of the system and that the user is able to provide the needed input information. Often enough, the trustworthiness of those methods is adjudged only on the basis of the soundness of the method itself. However, trustworthiness also rests a great deal on the viability of the assumptions that the method makes about the system and the user's ability, and on the extent to which those assumptions hold in practice. This paper discusses the hypotheses on which the major state-of-the-art timing analysis methods rely, identifying pitfalls and challenges that cause uncertainty and reduce confidence in the computed WCET estimates. While identifying weaknesses, this paper does not wish to discredit any method, but rather to increase awareness of their limitations and enable an informed selection of the technique that best fits the user's needs.
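    A deterministic sketch of one pitfall in this vein: the high-watermark of observed execution times is not a trustworthy WCET bound if a rare path is never exercised during measurement. The timings and the saturation path below are invented for illustration.

```python
# A program with a rarely taken, much slower error-handling path: a
# measurement campaign that never triggers it reports a high-watermark
# well below the true worst case.

def execution_time(sensor_value):
    if sensor_value >= 1023:   # rare saturation-handling path, 3x slower
        return 300
    return 100 + sensor_value % 10

# measurement campaign over inputs that never trigger the rare path
observed = [execution_time(v) for v in range(1000)]
high_watermark = max(observed)
true_wcet = execution_time(1023)
print(high_watermark, true_wcet)  # 109 vs 300: measurements miss the worst case
```

    This is precisely the kind of hidden hypothesis ("the test inputs exercise the worst-case path") whose viability the paper argues must be assessed alongside the soundness of the method itself.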